Information bounds for Gibbs samplers

Authors

Abstract


Similar articles

Information bounds for Gibbs samplers

If we wish to efficiently estimate the expectation of an arbitrary function on the basis of the output of a Gibbs sampler, which is better: deterministic or random sweep? In each case we calculate the asymptotic variance of the empirical estimator, the average of the function over the output, and determine the minimal asymptotic variance for estimators that use no information about the underlying...
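The two sweep strategies compared in the abstract can be sketched in a small simulation. This is a toy illustration only, not the paper's construction: it assumes a bivariate normal target with correlation `rho` (whose full conditionals are known in closed form) and forms the empirical estimator, i.e. the average of a function over the sampler output.

```python
import math
import random


def gibbs_bivariate_normal(n_iter, rho, sweep="systematic", seed=0):
    """Gibbs sampler for a bivariate normal target with correlation rho.

    Full conditionals: X | Y=y ~ N(rho*y, 1 - rho^2), and symmetrically
    for Y | X=x.  `sweep` selects the scan strategy:
      - "systematic": deterministic sweep, update both coordinates in
        fixed order each iteration;
      - "random": random sweep, update one coordinate chosen uniformly
        at random each iteration.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    s = math.sqrt(1.0 - rho * rho)  # conditional standard deviation
    out = []
    for _ in range(n_iter):
        if sweep == "systematic":
            x = rng.gauss(rho * y, s)
            y = rng.gauss(rho * x, s)
        else:
            if rng.random() < 0.5:
                x = rng.gauss(rho * y, s)
            else:
                y = rng.gauss(rho * x, s)
        out.append((x, y))
    return out


def empirical_mean(samples, f):
    """The empirical estimator: the average of f over the output."""
    return sum(f(x, y) for x, y in samples) / len(samples)
```

Running both sweeps and averaging, say, `f(x, y) = x + y` over the output gives the estimator whose asymptotic variance the paper analyses; comparing the spread of the estimate across independent replications (different seeds) is the simulation analogue of that comparison.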


Adaptive Gibbs samplers

We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various pos...
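The idea of updating selection probabilities on the fly can be sketched as follows. This is an illustrative construction, not the authors' algorithm: it assumes the same bivariate normal target as a convenient test case, re-tunes the coordinate-selection probabilities toward the coordinates' running empirical variances, and uses a diminishing adaptation step so the probabilities settle down as the run proceeds.

```python
import math
import random


def adaptive_random_scan_gibbs(n_iter, rho, seed=0):
    """Adaptive random-scan Gibbs sampler on a bivariate normal target.

    The selection probabilities `probs` are adapted during the run: every
    100 iterations they are nudged toward weights proportional to each
    coordinate's running empirical variance, with a step size shrinking
    like 1/sqrt(t) (diminishing adaptation) and clipped away from 0 and 1.
    """
    rng = random.Random(seed)
    x = [0.0, 0.0]
    probs = [0.5, 0.5]        # selection probabilities, adapted on the fly
    sums = [0.0, 0.0]         # running sums for the empirical variances
    sqs = [0.0, 0.0]
    s = math.sqrt(1.0 - rho * rho)
    out = []
    for t in range(1, n_iter + 1):
        # pick a coordinate according to the current selection probabilities
        i = 0 if rng.random() < probs[0] else 1
        x[i] = rng.gauss(rho * x[1 - i], s)
        for j in (0, 1):
            sums[j] += x[j]
            sqs[j] += x[j] * x[j]
        if t % 100 == 0:
            var = [sqs[j] / t - (sums[j] / t) ** 2 for j in (0, 1)]
            target0 = var[0] / (var[0] + var[1] + 1e-12)
            step = 1.0 / math.sqrt(t)          # diminishing adaptation
            probs[0] += step * (target0 - probs[0])
            probs[0] = min(0.9, max(0.1, probs[0]))
            probs[1] = 1.0 - probs[0]
        out.append(tuple(x))
    return out, probs
```

The clipping and the shrinking step size are crude stand-ins for the kind of conditions (diminishing adaptation, containment) under which adaptive samplers can be shown to converge; the cautionary example in the abstract is precisely about what can go wrong without such safeguards.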


Geometric Ergodicity of Gibbs Samplers

Due to a demand for reliable methods for exploring intractable probability distributions, the popularity of Markov chain Monte Carlo (MCMC) techniques continues to grow. In any MCMC analysis, the convergence rate of the associated Markov chain is of practical and theoretical importance. A geometrically ergodic chain converges to its target distribution at a geometric rate. In this dissertation,...


Partially Collapsed Gibbs Samplers: Theory and Methods

Ever increasing computational power along with ever more sophisticated statistical computing techniques is making it possible to fit ever more complex statistical models. Among the popular, computationally intensive methods, the Gibbs sampler (Geman and Geman, 1984) has been spotlighted because of its simplicity and power to effectively generate samples from a high-dimensional probability distr...


Adaptive Gibbs samplers and related MCMC methods

We consider various versions of adaptive Gibbs and Metropolis-within-Gibbs samplers, which update their selection probabilities (and perhaps also their proposal distributions) on the fly during a run, by learning as they go in an attempt to optimise the algorithm. We present a cautionary example of how even a simple-seeming adaptive Gibbs sampler may fail to converge. We then present various pos...



Journal

Journal title: The Annals of Statistics

Year: 1998

ISSN: 0090-5364

DOI: 10.1214/aos/1024691464